56 research outputs found

    Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching

    Full text link
    Considering the computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for realizing a lightweight model through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision model, which is time-saving and cost-saving without preparing a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold for the distribution distance between the center and samples is applied in the weight value search space. Third, in order to improve information transformation, we propose a one-to-one self-teaching (OST) module to give the student network the ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and the teacher network at the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms the existing detectors. The tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) (<2158 G), compared with any remote sensing-based, lightweight, or distillation-based algorithms, demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST. Comment: This article has been delivered to TRGS and is under review.
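
    The abstract's central idea, a low-bit student guided by its own full-precision counterpart, can be illustrated with a minimal PyTorch sketch. The toy network, the 4-bit fake quantization, and the 0.5 distillation weight below are illustrative assumptions, not the released GHOST implementation.

```python
# Minimal sketch of quantization self-distillation: a fake-quantized "student"
# is guided by its own full-precision copy acting as the teacher.
# Network, bit width, and loss weighting are assumptions for illustration.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max() / qmax + 1e-8
    w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    return w + (w_q - w).detach()  # forward quantized, backward identity

class QuantLinear(nn.Linear):
    def __init__(self, in_f: int, out_f: int, bits: int = 4):
        super().__init__(in_f, out_f)
        self.bits = bits

    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight, self.bits), self.bias)

def make_net(quantized: bool) -> nn.Sequential:
    layer = (lambda i, o: QuantLinear(i, o, bits=4)) if quantized else nn.Linear
    return nn.Sequential(layer(16, 32), nn.ReLU(), layer(32, 10))

student = make_net(quantized=True)              # low-bit student
teacher = make_net(quantized=False)             # its full-precision twin
teacher.load_state_dict(student.state_dict())   # identical initial weights
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))

opt.zero_grad()
logits_s = student(x)
with torch.no_grad():
    logits_t = teacher(x)

task_loss = F.cross_entropy(logits_s, y)
distill_loss = F.kl_div(F.log_softmax(logits_s, dim=-1),
                        F.softmax(logits_t, dim=-1), reduction="batchmean")
loss = task_loss + 0.5 * distill_loss           # 0.5 is an assumed weighting
loss.backward()
opt.step()
```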

    SuperYOLO: Super Resolution Assisted Object Detection in Multimodal Remote Sensing Imagery

    Full text link
    Accurately and promptly detecting multiscale small objects, which contain only tens of pixels, in remote sensing images (RSI) remains challenging. Most existing solutions primarily design complex deep neural networks to learn strong feature representations that separate objects from the background, which often incurs a heavy computation burden. In this article, we propose an accurate yet fast object detection method for RSI, named SuperYOLO, which fuses multimodal data and performs high-resolution (HR) object detection on multiscale objects by utilizing assisted super-resolution (SR) learning, considering both detection accuracy and computation cost. First, we utilize a symmetric compact multimodal fusion (MF) module to extract supplementary information from various data for improving small object detection in RSI. Furthermore, we design a simple and flexible SR branch to learn HR feature representations that can discriminate small objects from vast backgrounds with low-resolution (LR) input, thus further improving detection accuracy. Moreover, to avoid introducing additional computation, the SR branch is discarded in the inference stage, and the computation of the network model is reduced due to the LR input. Experimental results show that, on the widely used VEDAI RS dataset, SuperYOLO achieves an accuracy of 75.09% (in terms of mAP50), which is more than 10% higher than SOTA large models such as YOLOv5l, YOLOv5x, and the RS-designed YOLOrs. Meanwhile, the parameter size and GFLOPs of SuperYOLO are about 18 times and 3.8 times smaller, respectively, than those of YOLOv5x. Our proposed model shows a favorable accuracy-speed tradeoff compared with state-of-the-art models. The code will be open-sourced at https://github.com/icey-zhang/SuperYOLO. Comment: The article is accepted by IEEE Transactions on Geoscience and Remote Sensing.
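
    The training-only SR branch described above can be sketched as follows: an auxiliary super-resolution head is supervised during training and simply skipped at inference, so no extra computation is paid at test time. The module shapes, the 2x SR target, and the placeholder detection loss are assumptions for illustration, not SuperYOLO's actual architecture.

```python
# Sketch of an auxiliary SR branch trained alongside a detector and discarded
# at inference. Shapes, loss weighting, and the SR target are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(64, 16, 1)       # stand-in detection head
        self.sr_branch = nn.Sequential(            # used only during training
            nn.Conv2d(64, 3 * 16, 3, padding=1),
            nn.PixelShuffle(4),                    # 4x up -> 2x the LR input
        )

    def forward(self, x, with_sr: bool):
        feats = self.features(x)
        det_out = self.det_head(feats)
        if with_sr:
            return det_out, self.sr_branch(feats)
        return det_out

model = TinyBackbone()

# Training step: LR input, an HR image supervises the SR branch.
lr_img = torch.randn(2, 3, 128, 128)
hr_img = torch.randn(2, 3, 256, 256)
det_out, sr_out = model(lr_img, with_sr=True)
det_loss = det_out.square().mean()                # placeholder detection loss
sr_loss = F.l1_loss(sr_out, hr_img)
loss = det_loss + 1.0 * sr_loss                   # assumed SR loss weight

# Inference: the SR branch is skipped, so the extra computation disappears.
with torch.no_grad():
    det_only = model(lr_img, with_sr=False)
```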

    Research on road surface temperature characteristics and road ice warning model of ordinary highways in winter in Hunan province, central China

    Get PDF
    The study of winter road surface temperature (Ts) characteristics and early-warning methods for road icing is of great significance for reducing traffic accidents and improving transportation efficiency. Using hourly observation data from Hunan traffic meteorological stations from December 2020 to February 2022, this study analyzes the winter Ts characteristics of ordinary roads in Hunan Province and uses a logistic regression model to establish the icing temperature threshold for ordinary roads in the province, so as to build a hierarchical road icing early-warning model. The results show that Ts in southern Hunan is relatively high, with most stations above 10 °C, while the low-Ts area is in western Hunan, where most of the stations below 8 °C are located; this may be due to the higher altitude of western Hunan. In terms of diurnal variation, the lowest values of average Ts and air temperature (Ta) in Hunan Province in winter both appear at 7:00 Beijing Time (BT), while the highest values appear at 15:00 BT, and the average Ta is always lower than Ts. The temperature variation on bridge surfaces is more pronounced. When Ta is lower than −2.5 °C, more than 70% of the sites show a rapid increase in icing risk; when Ta is lower than −5 °C, nearly 87% of the sites reach risk level 4, which means the icing risk is extremely high. Furthermore, combining the warning model with thermal spectrum mapping can improve the spatial resolution of the warning model and also address the lack of observations in some areas.
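
    As a rough illustration of the logistic regression step described above, the sketch below fits an icing model on synthetic station-hour records and reads off an air-temperature threshold at 50% icing probability. The features, the synthetic data, and the 50% rule are assumptions for illustration, not the study's actual dataset or threshold definition.

```python
# Sketch: fit a logistic regression icing model from hourly observations and
# derive a temperature threshold. Data and feature choices are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: air temperature Ta (degC), road surface
# temperature Ts (degC), and an observed icing flag (0/1) per station-hour.
rng = np.random.default_rng(0)
ta = rng.uniform(-10, 10, 500)
ts = ta + rng.normal(1.0, 1.5, 500)
iced = (ts + rng.normal(0, 1.0, 500) < 0).astype(int)

X = np.column_stack([ta, ts])
model = LogisticRegression().fit(X, iced)

# Icing probability for a new observation (Ta = -3.0 degC, Ts = -1.5 degC).
p_ice = model.predict_proba([[-3.0, -1.5]])[0, 1]

# A simple single-variable threshold: the Ta at which the fitted model,
# holding Ts at its mean, crosses 50% icing probability.
ts_mean = ts.mean()
grid = np.linspace(-10, 10, 2001)
probs = model.predict_proba(
    np.column_stack([grid, np.full_like(grid, ts_mean)]))[:, 1]
ta_threshold = grid[np.argmin(np.abs(probs - 0.5))]
print(f"icing probability: {p_ice:.2f}, Ta threshold ~ {ta_threshold:.1f} degC")
```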

    Hyperspectral Image Super-Resolution Using Deep Feature Matrix Factorization

    No full text

    DDLPS: Detail-Based Deep Laplacian Pansharpening for Hyperspectral Imagery

    No full text

    CD4 T cell deficiency attenuates ischemic stroke, inhibits oxidative stress, and enhances Akt/mTOR survival signaling pathways in mice

    No full text
    Background: Inhibition of CD4 T cells reduces stroke-induced infarction by inhibiting neuroinflammation in the ischemic brain in experimental stroke. Nevertheless, little is known about its effects on neuronal survival signaling pathways. In this study, we investigated the effects of CD4 T cell deficits on oxidative stress and on the Akt/mTOR cell signaling pathways after ischemic stroke in mice. Methods: MHC II gene knockout C57/BL6 mice, with significantly decreased CD4 T cells, were used. Stroke was induced by 60-min middle cerebral artery (MCA) occlusion. Ischemic brain tissues were harvested for Western blotting. Results: The impairment of CD4 T cell production resulted in smaller infarction. The Western blot results showed that iNOS protein levels robustly increased at 5 h and 24 h and then returned toward baseline at 48 h in wild-type mice after stroke, and the gene KO inhibited iNOS at 5 h and 24 h. In contrast, the anti-inflammatory marker arginase I was found to be increased after stroke in WT mice and was further enhanced in the KO mice. In addition, stroke resulted in increased phosphorylated PTEN, Akt, PRAS40, P70S6, and S6 protein levels in WT mice, which were further enhanced in the animals whose CD4 T cells were impaired. Conclusion: The impairment of CD4 T cell production prevents ischemic brain injury, inhibits inflammatory signals, and enhances the Akt/mTOR cell survival signaling pathways.

    An Improved Pulse-Coupled Neural Network Model for Pansharpening

    No full text
    The pulse-coupled neural network (PCNN) and its modified models are suitable for multi-focus and medical image fusion tasks. Unfortunately, PCNNs are difficult to apply directly to multispectral image fusion, especially when spectral fidelity is considered. A key problem is that most fusion methods using PCNNs focus on the selection mechanism, either in the spatial domain or in the transform domain, rather than on a detail-injection mechanism, which is of utmost importance in multispectral image fusion. Thus, a novel pansharpening PCNN model for multispectral image fusion is proposed. The new model is designed to preserve spectral fidelity in terms of human visual perception for the fusion tasks. The experimental results, examined on different kinds of datasets, show the suitability of the proposed model for pansharpening.
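
    For context on the detail-injection mechanism the abstract emphasizes, the sketch below shows a generic detail-injection pansharpening step: PAN high-frequency details are extracted and injected into upsampled multispectral bands with per-band gains. This is a classic scheme for illustration only, not the paper's PCNN model; the Gaussian low-pass filter and covariance-based gains are assumptions.

```python
# Generic detail-injection pansharpening step (not the paper's PCNN model).
# Low-pass filter choice and band-wise gains are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def detail_injection_pansharpen(ms_lr: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """ms_lr: (bands, h, w) low-res multispectral; pan: (H, W) panchromatic."""
    bands, h, w = ms_lr.shape
    H, W = pan.shape
    # 1) Upsample each MS band to the PAN grid.
    ms_up = np.stack([zoom(b, (H / h, W / w), order=3) for b in ms_lr])
    # 2) Extract PAN spatial details as PAN minus its low-pass version.
    pan_lp = gaussian_filter(pan, sigma=2.0)
    details = pan - pan_lp
    # 3) Inject the details into each band with a per-band gain
    #    (covariance-based, in the spirit of classic injection schemes).
    gains = np.array([
        np.cov(b.ravel(), pan_lp.ravel())[0, 1] / (pan_lp.var() + 1e-8)
        for b in ms_up
    ])
    return ms_up + gains[:, None, None] * details

fused = detail_injection_pansharpen(
    np.random.rand(4, 64, 64), np.random.rand(256, 256))
print(fused.shape)  # (4, 256, 256)
```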

    Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder

    No full text
    Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in the spectral characteristics of different materials, thanks to its sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain high-resolution (HR) HS images in both the spectral and spatial domains. Different from previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine them with the panchromatic (PAN) image to competently represent the spatial information of HR HS images, which is more comprehensive and representative. In particular, based on the adversarial autoencoder (AAE) network, the SCAAE network is built with a spectral constraint added to the loss function so that spectral consistency and higher-quality spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature selection rule is introduced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from the two different sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. Analyzing the results of experiments executed on the test datasets with different methods shows that, in terms of CC, SAM, and RMSE, the performance of the proposed algorithm improves by about 1.42%, 13.12%, and 29.26%, respectively, on average over the well-performing HySure method. Compared with the MRA-based method, the improvements of the proposed method in the above three indexes are 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66% better, respectively, than the PCA-based method, which fully illustrates the superiority of the proposed method in spatial information preservation. All the experimental results demonstrate that the proposed method is superior to state-of-the-art fusion methods in terms of both subjective and objective evaluations.
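
    The core idea of adding a spectral constraint to the autoencoder loss can be sketched as below: a spectral-angle penalty is added to the reconstruction loss of a plain autoencoder. The SAM-style term, its 0.1 weight, and the toy network are assumptions (the paper's exact constraint may differ), and the adversarial part of the AAE is omitted for brevity.

```python
# Minimal sketch of a spectral constraint added to an autoencoder's
# reconstruction loss. Penalty form, weight, and network are assumptions;
# the adversarial component of the AAE is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

def spectral_angle_loss(x: torch.Tensor, x_hat: torch.Tensor) -> torch.Tensor:
    """Mean spectral angle between original and reconstructed pixel spectra.
    x, x_hat: (batch, bands) pixel vectors."""
    cos = F.cosine_similarity(x, x_hat, dim=-1).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos).mean()

bands, latent = 102, 16
encoder = nn.Sequential(nn.Linear(bands, 64), nn.ReLU(), nn.Linear(64, latent))
decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, bands))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(32, bands)            # a batch of hyperspectral pixel spectra
opt.zero_grad()
x_hat = decoder(encoder(x))
recon = F.mse_loss(x_hat, x)
spectral = spectral_angle_loss(x, x_hat)
loss = recon + 0.1 * spectral        # 0.1 is an assumed constraint weight
loss.backward()
opt.step()
```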

    A Novel Effectively Optimized One-Stage Network for Object Detection in Remote Sensing Imagery

    No full text
    Detecting small and densely arranged objects in wide-scale remote sensing imagery, a task of great significance in military and civilian applications, remains challenging. To solve this problem, we propose a novel effectively optimized one-stage network (NEOON). As a fully convolutional network, NEOON consists of four parts: feature extraction, feature fusion, feature enhancement, and multi-scale detection. To extract effective features, the first part implements bottom-up and top-down coherent processing by taking successive down-sampling and up-sampling operations in conjunction with residual modules. The second part consolidates high-level and low-level features by adopting concatenation operations with subsequent convolutional operations to explicitly yield strong feature representation and semantic information. The third part constructs a receptive field enhancement (RFE) module and incorporates it into the front part of the network, where the information of small objects resides. The final part uses four detectors with different sensitivities, all accessing the fused features in parallel, to enable the network to make full use of objects at different scales. Besides, the focal loss is adopted for the classification branch to address the tough problem of class imbalance in one-stage methods. In addition, we introduce Soft-NMS to preserve accurate bounding boxes in the post-processing stage, especially for densely arranged objects. Note that a split-and-merge strategy and a multi-scale training strategy are employed in training. Thorough experiments are performed on the ACS dataset constructed by us and the NWPU VHR-10 dataset to evaluate the performance of NEOON. Specifically, improvements of 4.77% and 5.50% in mAP and recall, respectively, on the ACS dataset compared with YOLOv3 powerfully prove that NEOON can effectively improve the detection accuracy of small objects in remote sensing imagery. In addition, extensive experiments and comprehensive evaluations on the NWPU VHR-10 dataset with 10 classes illustrate the superiority of NEOON in extracting spatial information from high-resolution remote sensing images.
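
    The Soft-NMS post-processing step mentioned above can be sketched with the standard Gaussian-decay formulation: instead of discarding overlapping boxes, their scores are decayed according to IoU with the currently selected box. The sigma and score threshold below are commonly used defaults, not necessarily NEOON's exact settings.

```python
# Sketch of Soft-NMS with a Gaussian score decay for densely arranged objects.
# Sigma and the score threshold are common defaults, assumed for illustration.
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-8)

def soft_nms(boxes: np.ndarray, scores: np.ndarray,
             sigma: float = 0.5, score_thresh: float = 0.001):
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep_boxes, keep_scores = [], []
    while len(boxes) > 0:
        i = int(np.argmax(scores))
        keep_boxes.append(boxes[i]); keep_scores.append(scores[i])
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        if len(boxes) == 0:
            break
        # Decay the scores of overlapping boxes instead of removing them.
        scores = scores * np.exp(-iou(keep_boxes[-1], boxes) ** 2 / sigma)
        mask = scores > score_thresh
        boxes, scores = boxes[mask], scores[mask]
    return np.array(keep_boxes), np.array(keep_scores)

b = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
s = np.array([0.9, 0.8, 0.7])
print(soft_nms(b, s))
```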